# Causal Language Model
## Myrrh Solar 10.7b 3.0
License: Apache-2.0
A large language model for the medical domain developed by MoAData, trained on an in-house medical dataset using the DPO method.
Tags: Large Language Model · Transformers · Korean
MoaData · 15.39k · 3
## Mamba 790m Hf
Mamba is an efficient state-space sequence model with 790 million parameters, provided in a Hugging Face Transformers-compatible format and suited to causal language modeling tasks.
Tags: Large Language Model · Transformers
state-spaces · 6,897 · 4
## Open Calm 3b
OpenCALM is the 3B-parameter member of CyberAgent's decoder-only language model series, pretrained on Japanese datasets.
Tags: Large Language Model · Transformers · Japanese
cyberagent · 850 · 20
## Pythia 1b
License: Apache-2.0
Pythia-1B is the 1-billion-parameter member of EleutherAI's Pythia suite, trained on The Pile dataset and designed for interpretability research.
Tags: Large Language Model · Transformers · English
EleutherAI · 79.69k · 38
## Pythia 6.9b
License: Apache-2.0
Pythia-6.9B is a large language model developed by EleutherAI, part of the Pythia scaling suite, designed to facilitate interpretability research.
Tags: Large Language Model · Transformers · English
EleutherAI · 46.72k · 54
## Pythia 410m
License: Apache-2.0
Pythia is a series of causal language models developed by EleutherAI for interpretability research. The suite spans eight model sizes, from 70 million to 12 billion parameters, with 154 training checkpoints per model.
Tags: Large Language Model · Transformers · English
EleutherAI · 83.28k · 25
## Pythia 1.4b
License: Apache-2.0
Pythia-1.4B is a 1.4-billion-parameter causal language model developed by EleutherAI, part of the Pythia scaling suite, designed for interpretability research.
Tags: Large Language Model · Transformers · English
EleutherAI · 60.98k · 23
## Skillet
A GPT-2 language model fine-tuned on lyrics by the band Skillet, for generating text in a similar style.
Tags: Large Language Model · English
huggingartists · 24 · 0